Deepfakes And The Future Of Litigation: Are We Ready?
Seeing will no longer be believing; seeing will require verification.
Lawyers and judges have not yet grasped that virtually any piece of evidence whose authenticity we once took for granted could now be fake.
The new generation of AI-related legal issues is inherently cross-disciplinary, implicating corporate law, intellectual property, data privacy, employment, corporate governance, and regulatory compliance.
Remember when people wanted to pass a 10-year moratorium on any AI regulation?
Deepfakes and evidence created or enhanced by AI are going to become increasingly prevalent. How can we successfully confront the problem?
Can't judge a book without reading its contents.
In the past year, deepfakes have become the second most common cybersecurity incident.
Determining the admissibility of videos created using AI tools presents a challenge even for the most technology-adept judges, of which there are relatively few.
Better to get this done sooner rather than later.
Caught red-handed or caught red herring?
Tesla is refusing to produce Musk to answer questions about his recorded safety claims, contending that the recordings may not be genuine and that he may never have made them.
We make it too easy.
In this political season it is easy to see how such deepfakes may be used.
Countering disinformation in a deepfake world.
Disinformation attacks create the perfect storm on a global level by traversing hemispheres and social classes in a matter of moments.